8 More SEO Topics that Have Me Stumped

A couple of years ago, I posed some questions from the SEO world that I couldn’t answer. Tonight, I’d like to repeat that process and throw out some dilemmas that, once again, have me in a quandary.

  1. Does QDD Give an Inherent Boost to Negative Subject Matter?
    Rumors have been circulating in the SEO world that QDD, Google’s “diversity” algorithm intended to show a greater variety of content types on page one of the SERPs, gives a rankings boost to negatively focused content, making it easier to rank for a company/brand/person’s name if you “talk smack” about them. True or false?
  2. The 302 Hijack is Back?
    From 2005-2007, complaints were flying about Google showing pages that 302-redirected to other URLs in place of the originals, allowing clever spammers to redirect conditionally, then steal the traffic when actual visitors clicked the ranking search result (a rough way to test for conditional redirects is sketched after this list). 2008 was pretty quiet on this topic, but recently, I’ve seen it flare up again. What happened? Did Google make a misstep? Or is there a new, more sinister version loose on the web that somehow gets around the old protections?
  3. Are the Engines Following URL Shortening Services?
    Some recent evidence suggests that the engines are using URLs from shortening services like TinyURL, Is.gd and others (even those that don’t cleanly 301 the way Zi.ma does) for discovery, and possibly flowing link juice through them as well (a quick way to check a shortener’s redirect behavior is sketched below, after this list). Has anyone experienced this, tested it or seen results that would prove the case one way or another?
  4. What Was Up with The Hyves Subdomain PR Penalty Checker?
    Marcus Tandler leaked the news that using the subdomain “hyves” attached to any root domain on the web would give you a PageRank number indicating whether the domain had suffered a PR penalty for buying/selling links. So many questions on this one – who leaked it? It had to be someone inside Google, right? No one could guess that randomly without at least hearing a whisper from the grapevine. Why would Google create it? Why would they leave it active? Why would they make it publicly accessible in the first place? Every search quality engineer has access to the console to pull up their internal stats on a domain, so they could easily mark it there… So weird.
  5. The Engines All Regularly Follow Many More than 100 Links Per Page?
    We work on a lot of pretty authoritative, powerful domains, so I’m wondering if this is just a fluke of being in a link-rich environment, but we consistently see Google, Yahoo! and Live/MSN following and indexing well over 300 links per page. Is this limited behavior, or do small sites, newer sites and those of you with test domains see this activity as well?
  6. Is it Really Harder to Get Rankings with a .info, .cc, or .biz Extension?
    We’ve heard from a few sources that these three extensions, along with several other international ccTLDs, might lower trust scores or increase the probability that your site is flagged for exclusion/devaluation/penalties. I haven’t done nearly enough testing on domains like this to know. Of course, I don’t think they’re good for branding, which is a big part of a long-term web marketing strategy, but that’s beside the point.
  7. Does Google Employ Link Buying Moles?
    I’ve now heard tales from two different companies, one very prominent in the industry, about all of their clients being manually penalized by Google, with their link networks and link buying sources identified with immense precision, as though an insider were leaking the data. I generally have a tough time believing this, since Google usually likes to do things in a very scalable fashion, and hiring moles to spy on link buying activity seems, to my mind, a very low-ROI, bandwidth-intensive endeavor. Have you heard/seen anything that would sway you definitively one way or the other? Is Google really conducting corporate espionage against those who would violate its quality guidelines?
  8. Will Anchor Text Value Pass Through Terribly Low Quality Links?
    A friend of mine recently hypothesized that while link juice might be compromised or even discounted entirely from spammy, low quality domains and pages, anchor text value could still pass. This supposed phenomenon is why more aggressive SEOs are buying/acquiring tons of super low quality links from crummy directories, old sites that have lost most of their PR, and open comment spam areas. Is there any truth to this? Why would an engine discount query-independent metrics like link juice but continue passing anchor text value through links?
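
On the 302 question (#2), here’s a minimal sketch of how you might poke at a suspect page yourself: request the same URL with a crawler-style User-Agent and a browser User-Agent, without following redirects, and compare what comes back. The URL is a placeholder, and I’m assuming the cloaking keys off the User-Agent header rather than IP, which is a big assumption.

```python
# Hypothetical check for a conditional 302: fetch the same URL with two
# different User-Agents, without following redirects, and compare the
# status codes and Location headers.
import requests

URL = "http://example.com/suspect-page"  # placeholder, not a real case

USER_AGENTS = {
    "crawler": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "browser": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko)",
}

for label, ua in USER_AGENTS.items():
    resp = requests.get(URL, headers={"User-Agent": ua},
                        allow_redirects=False, timeout=10)
    print(f"{label:8s} HTTP {resp.status_code} -> "
          f"{resp.headers.get('Location', '(no redirect)')}")

# If the crawler UA sees a 302 while the browser UA gets sent somewhere
# else entirely (or a plain 200), the redirect is conditional.
```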
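And on the shortener question (#3), a quick way to see whether a given service issues a clean 301 is to request a short URL without following redirects and look at the status code and Location header. The short URLs below are placeholders, not links I’ve actually tested.

```python
# Check whether a URL shortener issues a clean 301: request the short URL
# without following redirects and inspect the response.
import requests

SHORT_URLS = [
    "http://tinyurl.com/example",  # placeholder short link
    "http://is.gd/example",        # placeholder short link
]

for url in SHORT_URLS:
    resp = requests.get(url, allow_redirects=False, timeout=10)
    print(f"{url} -> HTTP {resp.status_code}, "
          f"Location: {resp.headers.get('Location', '(none)')}")

# A clean 301 is a permanent redirect the engines can treat like a normal
# link; a 302, meta refresh or JavaScript hop (which shows up as a 200
# here) is where the discovery/link-juice question gets murky.
```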

Hopefully, you’ve got more answers (and evidence) than I.
